feat(stack): reclaim leaked dev k3d networks on obol stack purge #393

Closed

OisinKyne wants to merge 1 commit into oisin/377-3 from oisin/377-4

Conversation

@OisinKyne
Contributor

Each k3d cluster create reserves a /16 from Docker's predefined
172.16.0.0/12 pool (~16 networks max). In OBOL_DEVELOPMENT mode the
pull-through registry-mirror containers
(k3d-obol-{docker,ghcr,quay}-io.localhost) attach to every cluster
network and are NOT disconnected by k3d cluster delete, so the
network leaks. After ~16 dev cycles the pool is exhausted and every
obol stack up fails with "all predefined address pools have been
fully subnetted".
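
To see how close a machine is to that failure, counting the subnets Docker has already carved out of the default pool is enough. A minimal sketch, assuming the Docker Engine Go SDK (github.com/docker/docker/client; exact option types vary slightly by SDK version) — this diagnostic is illustrative and not part of this PR:

```go
package main

import (
	"context"
	"fmt"
	"net"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	// Docker allocates bridge networks out of 172.16.0.0/12 by default;
	// each k3d cluster claims an entire /16 from that pool.
	_, pool, _ := net.ParseCIDR("172.16.0.0/12")
	nets, err := cli.NetworkList(ctx, types.NetworkListOptions{})
	if err != nil {
		panic(err)
	}
	used := 0
	for _, n := range nets {
		for _, cfg := range n.IPAM.Config {
			// Networks without an explicit subnet yield a parse error
			// and are skipped.
			if ip, _, err := net.ParseCIDR(cfg.Subnet); err == nil && pool.Contains(ip) {
				used++
			}
		}
	}
	fmt.Printf("%d subnets allocated from 172.16.0.0/12 (~16 /16s available)\n", used)
}
```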

flows/lib.sh has had cleanup_k3d_obol_networks for this since
c08c873/34e62e5; lift the same logic into Go and run it from Purge.

obol stack down is intentionally not touched — it calls
k3d cluster stop which preserves the network for Up to resume.
The rare graceful-stop-fallback-to-delete path inside Down is
covered the next time Purge runs.

Live clusters (any *-serverlb or *-server-N attachment) are
skipped, so this is safe alongside other running stacks. Mirror
containers auto-rejoin the next cluster's network on the next
obol stack up, so disconnecting them is non-destructive for the
cache.
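
Putting the two paragraphs above together, a sketch of what the Go-side reclaim could look like under those constraints, again assuming the Docker Engine Go SDK. The function name (ported from the shell helper cleanup_k3d_obol_networks) and the live-node pattern are illustrative, not necessarily the PR's exact code:

```go
package stack

import (
	"context"
	"regexp"
	"strings"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// liveNode matches k3d server and loadbalancer containers; a network with
// any of these attached belongs to a running cluster, not a leak.
var liveNode = regexp.MustCompile(`-(serverlb|server-\d+)$`)

func cleanupK3dObolNetworks(ctx context.Context, cli *client.Client) error {
	nets, err := cli.NetworkList(ctx, types.NetworkListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nets {
		if !strings.HasPrefix(n.Name, "k3d-") {
			continue // only touch networks k3d created
		}
		// NetworkList does not populate attachments; inspect for them.
		full, err := cli.NetworkInspect(ctx, n.ID, types.NetworkInspectOptions{})
		if err != nil {
			continue
		}
		live := false
		for _, ep := range full.Containers {
			if liveNode.MatchString(ep.Name) {
				live = true
				break
			}
		}
		if live {
			continue // running cluster: leave it alone
		}
		// Detach the registry mirrors (the only attachments left on a
		// leaked network), then hand the /16 back to Docker's pool.
		for id := range full.Containers {
			_ = cli.NetworkDisconnect(ctx, n.ID, id, true)
		}
		if err := cli.NetworkRemove(ctx, n.ID); err != nil {
			return err
		}
	}
	return nil
}
```

The force flag on NetworkDisconnect avoids failures on half-torn-down endpoints; whether the actual implementation forces the disconnect is an assumption here. Since the mirrors re-attach on the next obol stack up, disconnect-then-remove ordering is the only hard constraint.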

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

@bussyjd
Collaborator

bussyjd commented Apr 29, 2026

Superseded by #386 — all commits from this branch are already present at the tip of integration/pr377-pr381 (verified by git log <branch> ^origin/integration/pr377-pr381 returning empty). Closing to keep the queue tidy. Track final landing on main via #386.

@bussyjd bussyjd closed this Apr 29, 2026